13 research outputs found

    Integrating Perception, Prediction and Control for Adaptive Mobile Navigation

    Mobile robots capable of navigating seamlessly and safely in pedestrian-rich environments promise to bring robotic assistance closer to our daily lives. A key limitation of existing navigation policies is the difficulty of predicting and reasoning about the environment, including both static obstacles and pedestrians. In this thesis, I explore three aspects of navigation, prediction of occupied spaces, prediction of pedestrian motion, and measurement of uncertainty, to improve crowd navigation. The hypothesis is that improving prediction and uncertainty estimation will increase robot navigation performance, resulting in fewer collisions, faster speeds, and more socially compliant motion in crowds. Specifically, this thesis focuses on techniques that allow mobile robots to predict occupied spaces beyond the line of sight of the sensor. This is accomplished through novel generative neural network architectures that enable map predictions extending past the sensor's limitations. Further, I extend these architectures to predict multiple hypotheses and use the variance across hypotheses as a measure of uncertainty, which I use to formulate an information-theoretic map exploration strategy. Finally, control algorithms that leverage the predicted occupancy map were developed to demonstrate more robust, high-speed navigation on a physical small form factor autonomous car. I further extend the prediction and uncertainty approaches to model pedestrian motion for dynamic crowd navigation, including novel techniques that model human intent to predict the future motion of pedestrians. I show that this approach improves on state-of-the-art results in pedestrian prediction. I then show that prediction errors can be used as a measure of uncertainty to adapt the risk sensitivity of the robot controller in real time. Finally, I show that the crowd navigation algorithm extends to socially compliant behavior around groups of pedestrians. This research demonstrates that combining obstacle and pedestrian prediction with uncertainty estimation yields more robust navigation policies, resulting in improved map exploration efficiency, faster robot motion, fewer collisions, and more socially compliant robot motion within crowds.
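    A minimal sketch of the multi-hypothesis uncertainty idea described above, assuming a hypothetical stochastic predictor (e.g., a generative network sampled repeatedly) and hypothetical candidate viewpoint masks; this illustrates the variance-as-uncertainty exploration strategy, not the thesis implementation:

```python
import numpy as np

def predict_hypotheses(predictor, observed_map, n_samples=8):
    """Draw several occupancy-map hypotheses (H x W arrays of P(occupied))
    from a stochastic generative predictor, e.g. by repeated sampling."""
    return np.stack([predictor(observed_map) for _ in range(n_samples)])

def uncertainty_map(hypotheses):
    """Per-cell variance across hypotheses, used as a proxy for uncertainty."""
    return hypotheses.var(axis=0)

def select_next_viewpoint(predictor, observed_map, candidate_masks):
    """Pick the candidate viewpoint whose visible region covers the most
    predictive uncertainty (an information-gain-style score)."""
    unc = uncertainty_map(predict_hypotheses(predictor, observed_map))
    scores = [float((unc * mask).sum()) for mask in candidate_masks]
    return int(np.argmax(scores))
```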

    Learning a Group-Aware Policy for Robot Navigation

    Human-aware robot navigation promises a range of applications in which mobile robots bring versatile assistance to people in common human environments. While prior research has mostly modeled pedestrians as independent, intentional individuals, people often move in groups; consequently, it is imperative for mobile robots to respect human groups when navigating around people. This paper explores learning group-aware navigation policies based on dynamic group formation using deep reinforcement learning. Through simulation experiments, we show that group-aware policies, compared to baseline policies that neglect human groups, achieve better robot navigation performance (e.g., fewer collisions), minimize violations of social norms and discomfort, and reduce the robot's impact on pedestrians' movement. Our results contribute to the development of social navigation and the integration of mobile robots into human environments.
    Comment: 8 pages, 4 figures
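    A minimal sketch of one way a group-aware term could enter such a reinforcement-learning reward, assuming groups arrive as lists of 2D pedestrian positions; the convex-hull margin and penalty values here are illustrative choices, not the paper's reward:

```python
import numpy as np
from scipy.spatial import ConvexHull

def distance_to_group(robot_xy, group_positions):
    """Approximate distance from the robot to a group's convex hull (0 if inside).

    Uses the maximum signed facet distance, which is exact when the closest
    hull point lies on a facet and a lower bound near hull vertices."""
    robot_xy = np.asarray(robot_xy, dtype=float)
    pts = np.asarray(group_positions, dtype=float)
    if len(pts) < 3:  # too few members for a hull: fall back to nearest member
        return float(np.min(np.linalg.norm(pts - robot_xy, axis=1)))
    hull = ConvexHull(pts)
    # hull.equations rows are [a, b, c] with a*x + b*y + c <= 0 for interior points
    signed = hull.equations[:, :2] @ robot_xy + hull.equations[:, 2]
    return max(float(signed.max()), 0.0)

def group_space_penalty(robot_xy, groups, margin=0.5, penalty=-0.25):
    """Negative reward term whenever the robot intrudes on a group's space."""
    return sum(penalty for g in groups if distance_to_group(robot_xy, g) < margin)
```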

    A collaborative BCI approach to autonomous control of a prosthetic limb system

    Existing brain-computer interface (BCI) control of highly dexterous robotic manipulators and prosthetic devices typically relies solely on neural decode algorithms to determine the user's intended motion. Although these approaches have made significant progress toward controlling high degree-of-freedom (DOF) manipulators, the ability to perform activities of daily living (ADL) remains an ongoing research endeavor. In this paper, we describe a hybrid system that combines elements of autonomous robotic manipulation with neural decode algorithms to maneuver a highly dexterous robotic manipulator for a reach-and-grasp task. This system was demonstrated with a human patient implanted with cortical micro-electrode arrays, allowing the user to manipulate an object on a table and place it at a desired location. The preliminary results are promising: they demonstrate the potential to blend robotic control of lower-level manipulation tasks with neural control that lets the user focus on higher-level tasks, thereby reducing cognitive load and increasing the success rate of performing ADL-type activities.
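    A minimal sketch of the blended-autonomy idea described above, assuming hypothetical decoder and planner interfaces and a simple distance-based authority schedule; the actual system's arbitration logic is not specified in the abstract:

```python
import numpy as np

def blend_commands(v_neural, v_auto, dist_to_target, handoff_radius=0.15):
    """Linearly shift control authority toward the autonomous controller as the
    end effector approaches the target (alpha = 1 means full user control).

    v_neural, v_auto: Cartesian velocity commands (np.ndarray, m/s)
    dist_to_target:   end-effector distance to the target object (m)
    """
    alpha = float(np.clip(dist_to_target / handoff_radius, 0.0, 1.0))
    return alpha * np.asarray(v_neural) + (1.0 - alpha) * np.asarray(v_auto)

def control_step(decoder, planner, state):
    """One control cycle: decode user intent, plan an autonomous reach, blend."""
    v_neural = decoder(state.neural_features)      # user's decoded velocity
    v_auto = planner(state.ee_pose, state.target)  # autonomous reach velocity
    return blend_commands(v_neural, v_auto, state.dist_to_target)
```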